Search results for "Law of large numbers"
Showing 10 of 10 documents
A True Extension of the Markov Inequality to Negative Random Variables
2020
The Markov inequality is a classical result in statistics that serves to prove other important results, such as the Chebyshev inequality and the weak law of large numbers, and it has useful real-world applications: when a random variable is otherwise unspecified, it gives an upper bound for the probability that the variable takes large values. However, the Markov inequality has one main limitation: its validity is restricted to nonnegative random variables. In this very short note, we propose an extension of the Markov inequality to arbitrary, not necessarily nonnegative, random variables. This result is completely new.
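The classical (nonnegative) form of the inequality that this note extends can be checked numerically. The sketch below is illustrative only and does not implement the paper's extension; the exponential sample and the threshold `a` are arbitrary choices.

```python
import random

def markov_bound_check(samples, a):
    """Compare the empirical tail probability P(X >= a) with the
    Markov bound E[X]/a for a nonnegative sample."""
    n = len(samples)
    tail_prob = sum(1 for x in samples if x >= a) / n
    bound = sum(samples) / n / a          # empirical E[X] divided by a
    return tail_prob, bound

random.seed(0)
# Nonnegative draws from Exp(1), so E[X] = 1.
xs = [random.expovariate(1.0) for _ in range(100_000)]
tail, bound = markov_bound_check(xs, a=3.0)
assert tail <= bound                      # Markov: P(X >= a) <= E[X]/a
```

The bound is loose here (roughly 1/3 against a true tail probability near e^{-3}), which is typical: Markov's inequality trades sharpness for complete generality over the distribution.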
Scaling properties of topologically random channel networks
1996
Abstract The analysis deals with the scaling properties of infinite topologically random channel networks (ITRNs) first introduced by Shreve (1967, J. Geol., 75: 179–186) to model the branching structure of rivers as a random process. The expected configuration of ITRNs displays scaling behaviour only asymptotically, when the ruler (or ‘yardstick’) length is reduced to a very small extent. The random model can also reproduce scaling behaviour at larger ruler lengths if network magnitude and diameter are functionally related according to a reported deterministic rule. This indicates that subsets of ITRNs can be scaling and, although ITRNs are asymptotically plane-filling due to the law of la…
Law of the Iterated Logarithm
2020
For sums of independent random variables we already know two limit theorems: the law of large numbers and the central limit theorem. The law of large numbers describes for large \(n\in \mathbb{N}\) the typical behavior, or average value behavior, of sums of n random variables. On the other hand, the central limit theorem quantifies the typical fluctuations about this average value.
The “Gentle Law” of Large Numbers: Stifter’s Urban Meteorology
2020
Moments and Laws of Large Numbers
2020
The most important characteristic quantities of random variables are the median, expectation and variance. For large \(n\), the expectation describes the typical approximate value of the arithmetic mean \((X_1+\dots +X_n)/n\) of independent and identically distributed random variables (law of large numbers).
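The statement above can be seen directly by simulation: the arithmetic mean of i.i.d. draws settles near the expectation as n grows. A minimal sketch, with Uniform(0, 1) draws (expectation 0.5) as an arbitrary example:

```python
import random

random.seed(1)

def sample_mean(n, draw=lambda: random.uniform(0.0, 1.0)):
    """Arithmetic mean (X_1 + ... + X_n) / n of n i.i.d. draws."""
    return sum(draw() for _ in range(n)) / n

# Larger n pulls the average closer to E[X] = 0.5 (law of large numbers).
means = {n: sample_mean(n) for n in (10, 1_000, 100_000)}
assert abs(means[100_000] - 0.5) < 0.01
```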
The Vlasov Limit for a System of Particles which Interact with a Wave Field
2008
In two recent publications [Commun. PDE, vol.22, p.307--335 (1997), Commun. Math. Phys., vol.203, p.1--19 (1999)], A. Komech, M. Kunze and H. Spohn studied the joint dynamics of a classical point particle and a wave-type generalization of the Newtonian gravity potential, coupled in a regularized way. In the present paper the many-body dynamics of this model is studied. The Vlasov continuum limit is obtained in a form equivalent to a weak law of large numbers. We also establish a central limit theorem for the fluctuations around this limit.
Can the Adaptive Metropolis Algorithm Collapse Without the Covariance Lower Bound?
2011
The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step $n+1$ \[ S_n = Cov(X_1,...,X_n) + \epsilon I, \] that is, the sample covariance matrix of the history of the chain plus a (small) constant $\epsilon>0$ multiple of the identity matrix $I$. The lower bound on the eigenvalues of $S_n$ induced by the factor $\epsilon I$ is theoretically convenient, but practically cumbersome, as a good value for the parameter $\epsilon$ may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of $S_n$ away …
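The displayed covariance update \(S_n = \mathrm{Cov}(X_1,\dots,X_n) + \epsilon I\) can be sketched directly; the dimension, sample size and value of `eps` below are arbitrary illustrations, not recommendations from the article.

```python
import numpy as np

def am_proposal_cov(history, eps=1e-6):
    """Time-dependent AM proposal covariance
    S_n = Cov(X_1, ..., X_n) + eps * I:
    sample covariance of the chain history plus a small identity multiple."""
    X = np.asarray(history)                      # shape (n, d)
    d = X.shape[1]
    return np.cov(X, rowvar=False) + eps * np.eye(d)

rng = np.random.default_rng(0)
chain = rng.normal(size=(500, 2))                # stand-in for the chain history
S = am_proposal_cov(chain)
# The eps * I term keeps the smallest eigenvalue bounded away from zero,
# which is exactly the lower bound the article asks whether one can drop.
assert np.linalg.eigvalsh(S).min() >= 1e-6
```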
Estimating the geometric median in Hilbert spaces with stochastic gradient algorithms: Lp and almost sure rates of convergence
2016
The geometric median, also called the \(L^1\)-median, is often used in robust statistics. Moreover, it is increasingly common to deal with large samples taking values in high-dimensional spaces. In this context, a fast recursive estimator has been introduced by Cardot et al. (2013). This work aims at studying more precisely the asymptotic behavior of estimators of the geometric median based on such nonlinear stochastic gradient algorithms. The \(L^p\) rates of convergence as well as almost sure rates of convergence of these estimators are derived in general separable Hilbert spaces. Moreover, the optimal rates of convergence in quadratic mean of the averaged algorithm are also given.
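A finite-dimensional sketch of such a recursive estimator with averaging is below. The unit-step update toward each new observation and the step sizes \(\gamma_n = c\,n^{-3/4}\) are illustrative assumptions, not the exact tuning studied in the paper.

```python
import numpy as np

def geometric_median_sgd(samples, c=1.0):
    """Recursive stochastic-gradient estimate of the geometric median,
    with Polyak-Ruppert averaging of the iterates."""
    m = samples[0].astype(float)             # current iterate
    m_bar = m.copy()                         # running average of iterates
    for n, x in enumerate(samples[1:], start=1):
        diff = x - m
        norm = np.linalg.norm(diff)
        if norm > 0:
            # step toward the new point along the unit direction
            m = m + (c / n**0.75) * diff / norm
        m_bar += (m - m_bar) / (n + 1)       # online average
    return m_bar

rng = np.random.default_rng(42)
# Symmetric Gaussian cloud: its geometric median equals its mean.
data = rng.normal(loc=[2.0, -1.0], scale=1.0, size=(20_000, 2))
est = geometric_median_sgd(data)
assert np.linalg.norm(est - np.array([2.0, -1.0])) < 0.5
```

Averaging the iterates, rather than reporting the last one, is what yields the optimal quadratic-mean rates discussed in the abstract.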
An Adaptive Parallel Tempering Algorithm
2013
Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis–Hastings algorithms often fail. The mixing properties of the sampler depend strongly on the choice of tuning parameters, such as the temperature schedule and the proposal distribution used for local exploration. We propose an adaptive algorithm with a fixed number of temperatures which tunes both the temperature schedule and the parameters of the random-walk Metropolis kernel automatically. We prove the convergence of the adaptation and a strong law of large numbers for the algorithm under general conditions. We also prove as a side…
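The key move in parallel tempering is proposing to swap states between chains at adjacent temperatures. A minimal sketch of a standard swap-acceptance rule (assumed here; the paper's adaptive schedule is not reproduced):

```python
import math

def swap_accept_prob(beta_i, beta_j, logpi_xi, logpi_xj):
    """Acceptance probability for swapping the states of two chains at
    inverse temperatures beta_i, beta_j, whose current states have
    log-target values logpi_xi, logpi_xj."""
    return min(1.0, math.exp((beta_i - beta_j) * (logpi_xj - logpi_xi)))

# Moving a higher-probability state to the colder chain is always accepted.
assert swap_accept_prob(1.0, 0.5, -10.0, -2.0) == 1.0
```

How close the acceptance probabilities of adjacent swaps sit to a useful target is exactly what the temperature schedule controls, which is why the paper adapts it.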
On the stability and ergodicity of adaptive scaling Metropolis algorithms
2011
The stability and ergodicity properties of two adaptive random-walk Metropolis algorithms are considered. Both algorithms adjust the scaling of the proposal distribution continuously based on the observed acceptance probability. Unlike previously proposed forms of the algorithms, the adapted scaling parameter is not constrained within a predefined compact interval. The first algorithm is based on scale adaptation only, while the second one also incorporates covariance adaptation. A strong law of large numbers is shown to hold assuming that the target density is smooth enough and has either compact support or super-exponentially decaying tails.
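The scale-adaptation-only variant can be sketched as a random-walk Metropolis sampler whose log-scale is nudged toward a target acceptance rate, with no compact constraint on the scale. The step sizes \(\gamma_n = n^{-0.66}\) and target rate 0.234 below are common illustrative choices, not the paper's exact assumptions.

```python
import math
import random

def adaptive_scale_metropolis(logpi, x0, n_iter=5000, target=0.234):
    """Random-walk Metropolis with continuous, unconstrained adaptation
    of the proposal scale toward a target acceptance probability."""
    x, log_sigma = x0, 0.0
    accepts = 0
    for n in range(1, n_iter + 1):
        prop = x + math.exp(log_sigma) * random.gauss(0.0, 1.0)
        d = logpi(prop) - logpi(x)
        alpha = 1.0 if d >= 0 else math.exp(d)   # MH acceptance probability
        if random.random() < alpha:
            x, accepts = prop, accepts + 1
        # Robbins-Monro update on log sigma; note: no projection onto
        # a compact interval, as in the algorithms studied here.
        log_sigma += n**-0.66 * (alpha - target)
    return x, math.exp(log_sigma), accepts / n_iter

random.seed(7)
# Standard normal target via its log-density (up to a constant).
_, sigma, acc_rate = adaptive_scale_metropolis(lambda x: -0.5 * x * x, 0.0)
assert 0.1 < acc_rate < 0.45
```

Adapting the logarithm of the scale keeps the proposal width positive without clipping it, which is precisely the setting whose stability the paper analyzes.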